75 research outputs found

    Runtime home mapping for effective memory resource usage

    In tiled Chip Multiprocessors (CMPs), last-level cache (LLC) banks are usually shared but distributed among the tiles. A static mapping of cache blocks to the LLC banks leads to poor efficiency, since a block may be mapped far from the tiles that actually access it. Dynamic policies either rely on a static mapping of blocks to a set of banks (D-NUCA) or rely on the OS to dynamically load pages to statically mapped addresses (first-touch). In this paper, we propose Runtime Home Mapping (RHM), a new dynamic approach where the LLC home bank is determined at runtime by the memory controller when the block is fetched from main memory, trying to map each block as close as possible to the requestor, thus shortening execution time and lowering message latencies. Block migration and replication provide further improvements over basic RHM. In a further optimization we also eliminate the directory structure. All these optimizations involve specific NoC optimizations and co-designs. Results with PARSEC and SPLASH-2 applications show that RHM achieves a 41% and 35% average reduction in load and store latencies, respectively, compared to static mapping. This leads to an average reduction of 28% in application execution time.
    Lodde, M.; Flich Cardo, J. (2014). Runtime home mapping for effective memory resource usage. Microprocessors and Microsystems. 38(4):276-291. doi:10.1016/j.micpro.2014.03.008
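
    As a rough illustration of the home-bank selection described above (not the authors' implementation), the sketch below assumes a 2D mesh of tiles, a Manhattan-distance cost, and a hypothetical has_free_way() occupancy check; the memory controller simply picks the closest bank that can still accept the block.

        #include <limits.h>

        /* Minimal sketch of runtime home mapping: when a block is fetched from
         * main memory, the memory controller picks an LLC home bank as close as
         * possible to the requestor (Manhattan distance on a 2D mesh). The mesh
         * dimensions and the has_free_way() capacity check are illustrative
         * assumptions, not part of the published design. */

        #define MESH_X 4
        #define MESH_Y 4

        typedef struct { int x, y; } tile_t;

        static int manhattan(tile_t a, tile_t b) {
            int dx = a.x > b.x ? a.x - b.x : b.x - a.x;
            int dy = a.y > b.y ? a.y - b.y : b.y - a.y;
            return dx + dy;
        }

        /* Stand-in occupancy check; a real design would consult bank state. */
        static int has_free_way(tile_t t) { (void)t; return 1; }

        /* Choose the home bank for a block requested by 'requestor'; falls back
         * to the requestor's own tile if every bank is reported full. */
        tile_t choose_home_bank(tile_t requestor) {
            tile_t best = requestor;
            int best_dist = INT_MAX;
            for (int x = 0; x < MESH_X; x++) {
                for (int y = 0; y < MESH_Y; y++) {
                    tile_t cand = { x, y };
                    if (!has_free_way(cand))
                        continue;            /* skip saturated banks */
                    int d = manhattan(requestor, cand);
                    if (d < best_dist) {     /* closest available bank wins */
                        best_dist = d;
                        best = cand;
                    }
                }
            }
            return best;
        }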

    CUTBUF: Buffer Management and Router Design for Traffic Mixing in VNET-Based NoCs

    "© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works[EN] Router's buffer design and management strongly influence energy, area and performance of on-chip networks, hence it is crucial to encompass all of these aspects in the design process. At the same time, the NoC design cannot disregard preventing network-level and protocol-level deadlocks by devoting ad-hoc buffer resources to that purpose. In chip multiprocessor systems the coherence protocol usually requires different virtual networks (VNETs) to avoid deadlocks. Moreover, VNETutilization is highly unbalanced and there is no way to share buffers between them due to the need to isolate different traffic types. This paper proposes CUTBUF, a novel NoC router architecture to dynamically assign virtual channels (VCs) to VNETs depending on the actual VNETs load to significantly reduce the number of physical buffers in routers, thus saving area and power without decreasing NoC performance. Moreover, CUTBUF allows to reuse the same buffer for different traffic types while ensuring that the optimized NoC is deadlock-free both at network and protocol level. In this perspective, all the VCs are considered spare queues not statically assigned to a specific VNETand the coherence protocol only imposes a minimum number of queues to be implemented. Synthetic applications as well as real benchmarks have been used to validate CUTBUF, considering architectures ranging from 16 up to 48 cores. Moreover, a complete RTL router has been designed to explore area and power overheads. Results highlight how CUTBUF can reduce router buffers up to 33 percent with 2 percent of performance degradation, a 5 percent of operating frequency decrease and area and power saving up to 30.6 and 30.7 percent, respectively. Conversely, the flexibility of the proposed architecture improves by 23.8 percent the performance of the baseline NoC router when the same number of buffers is used.Zoni, D.; Flich Cardo, J.; Fornaciari, W. (2016). CUTBUF: Buffer Management and Router Design for Traffic Mixing in VNET-Based NoCs. IEEE Transactions on Parallel and Distributed Systems. 27(6):1603-1616. doi:10.1109/TPDS.2015.2468716S1603161627

    Transient and Permanent Error Control for High-End Multiprocessor Systems-on-Chip

    High-end MPSoC systems with built-in high-radix topologies achieve good performance because of the improved connectivity and the reduced network diameter. In high-end MPSoC systems, fault tolerance support is becoming a compulsory feature. In this work, we propose a combined method to address permanent and transient link and router failures in those systems. The LBDRhr mechanism is proposed to tolerate permanent link failures in some popular high-radix topologies. The increased router complexity may lead to more transient router errors than in routers using a simple XY routing algorithm. We exploit the inherent information redundancy (IIR) in the LBDRhr logic to manage transient errors in the network routers. Thorough analyses are provided to discover the appropriate internal nodes and the forbidden signal patterns for transient error detection. Simulation results show that the LBDRhr logic can tolerate all permanent failure combinations of long-range links and 80% of link failures at short-range links. Case studies show that the error detection method based on the new IIR extraction approach reduces the power consumption and the residual error rate by 33% and by up to two orders of magnitude, respectively, compared to triple modular redundancy. The impact of network topologies on the efficiency of the detection mechanism has also been examined in this work.

    Toward matrix multiplication for deep learning inference on the Xilinx Versal

    The remarkable positive impact of Deep Neural Networks on many Artificial Intelligence (AI) tasks has led to the development of various high-performance algorithms as well as specialized processors and accelerators. In this paper we address this scenario by demonstrating that the principles underlying the modern realization of the general matrix multiplication (GEMM) in conventional processor architectures are also valid for achieving high performance for the type of operations that arise in deep learning (DL) on an exotic accelerator such as the AI Engine (AIE) tile embedded in Xilinx Versal platforms. In particular, our prototype implementation of the GEMM kernel on a Xilinx Versal VCK190 delivers performance close to 86.7% of the theoretical peak that can be expected from an AIE tile for 16-bit integer operands.
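
    The GEMM principles the abstract refers to (cache blocking around a small inner kernel, with 16-bit operands accumulated in 32 bits) can be sketched in plain C as below; the MC/NC/KC block sizes and the scalar inner loop are illustrative choices, not the Versal AIE implementation.

        #include <stdint.h>
        #include <stddef.h>

        /* Sketch of a blocked GEMM, C += A * B, with 16-bit integer operands
         * accumulated in 32 bits. Matrices are row-major with leading
         * dimensions lda/ldb/ldc. MC/NC/KC are illustrative cache-blocking
         * parameters, not values tuned for the AIE tile. */

        #define MC 64
        #define NC 64
        #define KC 64

        void gemm_i16(size_t m, size_t n, size_t k,
                      const int16_t *A, size_t lda,
                      const int16_t *B, size_t ldb,
                      int32_t *C, size_t ldc)
        {
            for (size_t jc = 0; jc < n; jc += NC)          /* column panels */
                for (size_t pc = 0; pc < k; pc += KC)      /* K panels      */
                    for (size_t ic = 0; ic < m; ic += MC)  /* row panels    */
                    {
                        size_t nb = (n - jc < NC) ? n - jc : NC;
                        size_t kb = (k - pc < KC) ? k - pc : KC;
                        size_t mb = (m - ic < MC) ? m - ic : MC;

                        /* Inner kernel over the current block; on a real
                         * accelerator this is the part that gets mapped to
                         * the vector/AIE datapath. */
                        for (size_t j = 0; j < nb; j++)
                            for (size_t i = 0; i < mb; i++) {
                                int32_t acc = C[(ic + i) * ldc + (jc + j)];
                                for (size_t p = 0; p < kb; p++)
                                    acc += (int32_t)A[(ic + i) * lda + (pc + p)] *
                                           (int32_t)B[(pc + p) * ldb + (jc + j)];
                                C[(ic + i) * ldc + (jc + j)] = acc;
                            }
                    }
        }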

    Area-efficient snoopy-aware NoC design for high-performance chip multiprocessor systems

    Manycore CMP systems are expected to grow to tens or even hundreds of cores. In this paper we show that effectively co-designing both the network-on-chip and the coherence protocol improves performance and power while total area remains bounded. We propose a snoopy-aware network-on-chip topology made of two mesh-of-tree topologies. Reducing the complexity of the coherence protocol, and hence its resources, and moving this complexity to the network leads to a global decrease in power consumption while area is barely affected. The benefits of our proposal are due to the high throughput and low delay of the network, but also to the simplicity of the coherence protocol. The proposed network and protocol minimize communication among cores when compared to traditional solutions based either on 2D-mesh topologies or on directory-based protocols.
    Roca Pérez, A.; Hernández Luz, C.; Lodde, M.; Flich Cardo, J. (2015). Area-efficient snoopy-aware NoC design for high-performance chip multiprocessor systems. Computers and Electrical Engineering. 45:374-385. doi:10.1016/j.compeleceng.2015.04.020

    Enforcing Predictability of Many-cores with DCFNoC

    The ever-growing need for higher performance forces industry to include technology based on multi-processor systems-on-chip (MPSoCs) in their safety-critical embedded systems. MPSoCs include a network-on-chip (NoC) to interconnect the cores with each other, with memory, and with the rest of the shared resources. Unfortunately, the inclusion of NoCs makes it harder to guarantee time predictability, as network-level conflicts may occur. To overcome this problem, in this paper we propose DCFNoC, a new time-predictable NoC design paradigm where conflicts within the network are eliminated by design. This new paradigm builds on top of the Channel Dependency Graph (CDG) in order to deterministically avoid network conflicts. The network guarantees predictability to applications and is able to naturally inject messages using a TDM period equal to the optimal theoretical bound without the need for a computationally demanding offline process. DCFNoC is integrated into a tile-based many-core system and adapted to its memory hierarchy. Our results show that DCFNoC guarantees time predictability, avoiding network interference among multiple running applications. DCFNoC always guarantees performance and also improves wormhole performance in a 4 × 4 setting by a factor of 3.7× when interference traffic is injected. For an 8 × 8 network the differences are even larger. In addition, DCFNoC obtains a total area saving of 10.79% over a standard wormhole implementation.
    This work has been supported by MINECO under Grant BES-2016-076885, by MINECO and funds from the European ERDF under Grant TIN2015-66972-C05-1-R and Grant RTI2018-098156-B-C51, and by the EC H2020 RECIPE project under Grant 801137.
    Picornell-Sanjuan, T.; Flich Cardo, J.; Hernández Luz, C.; Duato Marín, JF. (2021). Enforcing Predictability of Many-cores with DCFNoC. IEEE Transactions on Computers. 70(2):270-283. doi:10.1109/TC.2020.2987797
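
    In very simplified form, a conflict-free TDM schedule of the kind described above can be illustrated by gating injection on a per-node slot within a fixed period; the period value and the identity slot assignment below are placeholders rather than the CDG-derived schedule of the paper.

        #include <stdbool.h>
        #include <stdint.h>

        /* Very simplified TDM-gated injection: each node owns one slot inside a
         * fixed period and may only inject in that slot, so injections from
         * different nodes never compete for the same slot. The period and the
         * identity slot assignment are placeholders; the paper derives the real
         * schedule from the channel dependency graph. */

        #define NUM_NODES  16
        #define TDM_PERIOD 16   /* placeholder: one slot per node */

        /* Placeholder slot table: node i injects in slot i. */
        static int slot_of_node(int node) { return node % TDM_PERIOD; }

        /* May 'node' inject a new packet at cycle 'cycle'? */
        bool may_inject(int node, uint64_t cycle) {
            return (cycle % TDM_PERIOD) == (uint64_t)slot_of_node(node);
        }

        /* The predictability argument in a nutshell: a node waits at most one
         * full period for its slot, so injection latency is bounded by design. */
        uint64_t worst_case_injection_wait(void) { return TDM_PERIOD - 1; }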

    Prácticas Experimentales de Memorias Cache

    Knowledge of cache memories is considered basic and essential in the education of any university graduate (technical or higher degree) in Computer Science. Their study can be approached from different points of view. On the one hand, from a theoretical point of view, describing their basic operation: how a stored block is located, which block should be replaced, and so on. On the other hand, from a practical point of view, verifying the studied behavior, usually by means of a simulator. This article falls within the practical approach and proposes the use of experimental lab sessions for the study of cache memories as a complement to simulators. The idea is to use a simple program that measures memory-system access times to experimentally observe the effect of the cache system on the performance of a personal computer. By carrying out these lab sessions, students deduce answers to questions such as: how many cache levels does the computer have? How many ways does each of them have? What is its block size? These lab sessions were carried out for the first time during the current academic year at the Escuela Técnica Superior de Informática Aplicada and at the Facultad de Informática of the Universidad Politécnica de Valencia, with a good degree of acceptance by both faculty and students.
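
    A timing program of the kind the abstract mentions can be sketched as follows; the array sizes, the fixed stride, and the use of clock_gettime are illustrative choices, not the exact lab code. Plotting the reported nanoseconds per access against the working-set size reveals steps near the cache level capacities.

        #define _POSIX_C_SOURCE 199309L
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* Sketch of the measurement idea: walk buffers of growing size with a
         * fixed stride and print the average time per access. Jumps in the
         * resulting curve hint at the cache level capacities; sweeping the
         * stride instead hints at the block (line) size. All sizes and counts
         * are illustrative, not the original lab program. */

        #define STRIDE  64              /* bytes between consecutive accesses */
        #define REPEATS 200

        static double walk(volatile char *buf, size_t size) {
            struct timespec t0, t1;
            size_t accesses = 0;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int r = 0; r < REPEATS; r++)
                for (size_t i = 0; i < size; i += STRIDE) {
                    buf[i]++;            /* touch one byte per cache block */
                    accesses++;
                }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            return ns / (double)accesses;
        }

        int main(void) {
            for (size_t size = 4 * 1024; size <= 32 * 1024 * 1024; size *= 2) {
                char *buf = calloc(size, 1);
                if (!buf) return 1;
                printf("%8zu KiB: %.2f ns/access\n", size / 1024, walk(buf, size));
                free(buf);
            }
            return 0;
        }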

    HP-DCFNoC: High Performance Distributed Dynamic TDM Scheduler Based on DCFNoC Theory

    The need for increasing the performance of critical real-time embedded systems pushes the industry to adopt complex multi-core processor designs with embedded networks-on-chip. In this paper we present hp-DCFNoC, a distributed dynamic scheduler design that, by relying on the key properties of a delayed conflict-free NoC (DCFNoC), is able to achieve peak performance very close to that of a wormhole-based NoC design without compromising its real-time guarantees. In particular, our results show that the proposed scheduler achieves an overall throughput improvement of 6.9x and 14.4x over a baseline DCFNoC for 16- and 64-node meshes, respectively. When compared against a standard wormhole router, 95% of its network throughput is preserved while strict timing predictability is retained as a property. This achievement opens the door to new high-performance, time-predictable NoC designs.
    This work was supported in part by the Secretaría de Estado de Investigación, Desarrollo e Innovación (MINECO) under Grant BES-2016-076885, in part by the European Regional Development Fund (ERDF) under Grant TIN2015-66972-C05-1-R and Grant RTI2018-098156-B-C51, and in part by the EC H2020 European Institute of Innovation and Technology (SELENE) Project under Grant 871467.
    Picornell-Sanjuan, T.; Flich Cardo, J.; Duato Marín, JF.; Hernández Luz, C. (2020). HP-DCFNoC: High Performance Distributed Dynamic TDM Scheduler Based on DCFNoC Theory. IEEE Access. 8:194836-194849. doi:10.1109/ACCESS.2020.3033853
    • …